Search for: All records
Total Resources: 4
Filter by Author / Creator:
- Hong, Junyuan (4)
- Li, Bo (4)
- Li, Zhangheng (4)
- Wang, Zhangyang (4)
- Zhang, Chenhui (4)
- Diffenderfer, James (3)
- Duan, Jinhao (3)
- Hendrycks, Dan (3)
- Kailkhura, Bhavya (3)
- Lieberman, Kelsey (3)
- Song, Dawn (3)
- Xie, Chulin (3)
- Xu, Kaidi (3)
- Bartoldson, Brian (2)
- Jaiswal, Ajay (2)
- Bartoldson, Brian R (1)
- Jaiswal, Ajay Kumar (1)
- Wang, Jiachen (1)

- Hong, Junyuan; Duan, Jinhao; Zhang, Chenhui; Li, Zhangheng; Xie, Chulin; Lieberman, Kelsey; Diffenderfer, James; Bartoldson, Brian; Jaiswal, Ajay; Xu, Kaidi; et al. (International Conference on Machine Learning (ICML 2024))
- Hong, Junyuan; Duan, Jinhao; Zhang, Chenhui; Li, Zhangheng; Xie, Chulin; Lieberman, Kelsey; Diffenderfer, James; Bartoldson, Brian; Jaiswal, Ajay; Xu, Kaidi; et al. (International Conference on Machine Learning (ICML))
- Hong, Junyuan; Duan, Jinhao; Zhang, Chenhui; Li, Zhangheng; Xie, Chulin; Lieberman, Kelsey; Diffenderfer, James; Bartoldson, Brian R; Jaiswal, Ajay Kumar; Xu, Kaidi; et al. (Proceedings of Machine Learning Research)
  Compressing high-capability Large Language Models (LLMs) has emerged as a favored strategy for resource-efficient inference. While state-of-the-art (SoTA) compression methods boast impressive advances in preserving benign task performance, the potential risks of compression in terms of safety and trustworthiness have been largely neglected. This study conducts the first thorough evaluation of three leading LLMs using five SoTA compression techniques across eight trustworthiness dimensions. Our experiments highlight the intricate interplay between compression and trustworthiness, revealing some interesting patterns. We find that quantization is currently a more effective approach than pruning for achieving efficiency and trustworthiness simultaneously. For instance, a 4-bit quantized model retains the trustworthiness of its original counterpart, but model pruning significantly degrades trustworthiness even at 50% sparsity. Moreover, employing quantization within a moderate bit range can unexpectedly improve certain trustworthiness dimensions such as ethics and fairness. Conversely, extreme quantization to very low bit levels (3 bits) tends to reduce trustworthiness significantly. This increased risk cannot be uncovered by looking at benign performance alone, which in turn mandates comprehensive trustworthiness evaluation in practice. These findings culminate in practical recommendations for simultaneously achieving high utility, efficiency, and trustworthiness in LLMs.
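
As a rough illustration of the two compression settings contrasted in this abstract (4-bit weight quantization versus pruning to 50% sparsity), the sketch below prepares a causal LM both ways. This is not the paper's evaluation code; the model name, the NF4 quantization settings, and the choice of unstructured L1 magnitude pruning over linear layers are illustrative assumptions.

```python
# Sketch only: prepare a 4-bit quantized model and a 50%-pruned model.
# Model choice and hyperparameters are assumptions, not the paper's setup.
import torch
from torch.nn.utils import prune
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_name = "facebook/opt-1.3b"  # placeholder; any causal LM checkpoint works

# --- 4-bit weight quantization at load time (bitsandbytes NF4) ---
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)

# --- 50% unstructured magnitude pruning of every linear layer ---
dense_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
for module in dense_model.modules():
    if isinstance(module, torch.nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)  # zero the smallest 50% of weights
        prune.remove(module, "weight")                            # bake the sparsity into the tensor
```

Either compressed model could then be run through the same trustworthiness benchmarks (ethics, fairness, robustness, and so on) as the dense original to reproduce the kind of comparison the abstract describes.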